
The goal of this section is to describe the core functions and document the way Toni organized her code. The functions can largely be divided into three categories. First, the model solver discretizes the state space, sets up the transition matrix, and solves the model. Second, the simulation subroutines simulate the model and collect statistics. Finally, the estimation routines prepare the data and run the estimation procedure.

The following setup is necessary for the code to run.

# Think about which file to include this code in eventually.
using Random, Distributions, LinearAlgebra, Plots, Statistics, Printf

1. Model Solver

The goal in this part is to solve the dynamic model via value function iteration (VFI). The code organization is slightly redundant and chaotic in my humble opinion, so I include an overview here.

Toni has a code script for almost every single function – this is way too many files! It would be better to group them either into (1) fewer files containing multiple functions, or into (2) separate sub-folders containing individual functions. But to be fair, her readme file is very useful.
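To illustrate option (1), a grouped layout could look like the following sketch. The module name, file layout, and placeholder function here are hypothetical illustrations of mine, not Toni's actual code:

```julia
# Solver.jl — a hypothetical grouped layout (module and file names are my
# own illustration, not Toni's). The individual scripts would either be
# include()-ed here or their function definitions moved in directly:
# include("makegrids.jl"); include("tauchen.jl"); include("valfun.jl"); ...
module Solver

export makegrids

# Placeholder definition so this sketch is self-contained
makegrids(n::Int) = collect(range(0.0, stop=1.0, length=n))

end # module

using .Solver
println(length(makegrids(5)))  # prints 5
```

A single `include("Solver.jl")` would then replace the long list of includes shown below.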

The code file valfun.jl performs the VFI.

A separate file called collectfunctions.jl calls the above valfun.jl and performs some plotting, interpolation, simulation, and moment generation. The following code block lists all the relevant files.

# # ============ Stuff that goes into VFI ====================
# include("makegrids.jl")          # makes the grids
# include("tauchen.jl")            # makes the transition matrix
# include("profit.jl")             # makes the profit function
# include("fillin.jl")             # interpolates the value function. Inefficient. You should make the weights only once. 
# include("maxbellman.jl")         # finds the maximum of the bellman equation
# include("tinybellman.jl")        # finds the maximum of the bellman equation, within limited bounds
# include("howard.jl")             #  Howard's policy improvement algorithm
# include("makepol.jl")            #  makes the policy function from the index policy function
# include("makeconst.jl")          #  renormalizes the profit function
# include("ptrue.jl")              #  sets the true parameter values
# include("mew.jl")                #  makes the stationary distribution. 
# include("inbetween.jl")          #  makes interpolation weights and indices
# include("extract_parameters.jl") # little helper function to pull parameters out of a vector
# include("update.jl")             # updates the value function

# # ============ VFI =========================================
# if settings.fast == true
#     include("valfun.jl")          # The fast version
# else
#     include("valfunslow.jl")      # The slow version
# end
# # ============ Plotting and simulation =====================
# include("plotstuff.jl")    # Self-explanatory
# include("printstuff.jl")   # Self-explanatory
# include("simmodel.jl")     # Simulates the model. 
# include("interpol.jl")     # Interpolates the value and policy functions

# # ============Making moments ===============================
# include("momentgen.jl");   # wrapper
# include("makemoments.jl"); # Self-explanatory

The values for the computational parameters are separately defined in invest_mod.jl. Using these specific values, Toni then runs all of these files in another file called evalfun.jl. As the name implies, this is where the actual function evaluation is done. This produces the simulated moments for a single parameter combination \((\alpha, \delta)\) and the associated comparative statics output.

Discretization

The relevant code files for this step are tauchen.jl and makegrids.jl.

Recall that the process that governs the evolution of shock \(z\) is given as: \[ \ln(z') = \rho \ln(z) + \sigma \varepsilon', \quad \varepsilon' \sim \mathcal{N}(0, 1) \]

Using the Tauchen method, we can approximate this continuous state space with a discrete Markov chain. The following function implements the Tauchen method to generate a grid of \(z\) values and the corresponding transition probabilities. The input parameters for this function are: the autocorrelation ρ = 0.7, intercept of the AR1 μ = 0, standard deviation of the residual σ = 0.2, number of standard deviations q = 3, and the number of grid points N_z = 41.

function tauchen(mew::Float64, sigma::Float64, rho::Float64, znum::Int, q::Float64)

    zstar = mew / (1.0 - rho) #expected value of z
    sigmaz = sigma / sqrt(1.0 - rho^2) #stddev of z

    z = zstar .+ collect(range(-q * sigmaz, stop=q * sigmaz, length=znum))


    trans = zeros(znum, znum)
    w = (z[2] - z[1])  #Note that all the points are equidistant by construction.
    for iz in 1:znum
        for izz in 1:znum
            binhi = (z[izz] - rho * z[iz] + w / 2.0) / sigma
            binlo = (z[izz] - rho * z[iz] - w / 2.0) / sigma
            if izz == 1
                trans[iz, izz] = cdf(Normal(), binhi)
            elseif izz == znum
                trans[iz, izz] = 1.0 - cdf(Normal(), binlo)
            else
                trans[iz, izz] = cdf(Normal(), binhi) - cdf(Normal(), binlo)
            end
        end
    end

    return z::Vector{Float64}, trans::Matrix{Float64}
end

The output is a vector of \(z\) values and a matrix of transition probabilities. Throughout the code, we refer to them as z_grid and trans, respectively.

# Tauchen parameters
N_z = 41   # shock grid number
σ = 0.20   # SD of error term
μ = 0.0    # AR1 intercept
ρ = 0.70   # AR1 coefficient
q = 3.0    # 3 standard deviations for the z

z_grid, trans = tauchen(μ, σ, ρ, N_z, q);
z_grid = exp.(z_grid);

We discretize the \(z\) shock to 41 points.

print(z_grid)
[0.4316379803912925, 0.45015664900394275, 0.46946982853260255, 0.48961160607115856, 0.5106175311603266, 0.5325246785313611, 0.5553717135416735, 0.579198960417851, 0.6040484734265217, 0.6299641110986802, 0.656991613638479, 0.6851786836531056, 0.7145750703462349, 0.7452326573236532, 0.7772055541660273, 0.8105501919304436, 0.8453254227492717, 0.8815926237021424, 0.9194158051443708, 0.9588617236830193, 1.0, 1.0429032417301707, 1.0876471716112988, 1.1343107611320749, 1.182976369914058, 1.2337298910735608, 1.2866609028200269, 1.3418628265584742, 1.3994330917750424, 1.4594733079966673, 1.52208944412838, 1.5873920154847612, 1.6554962788456467, 1.7265224358803595, 1.8005958452994977, 1.877847244108723, 1.9584129783550546, 2.0424352437729247, 2.1300623367547344, 2.2214489160888555, 2.316756275927041]

The transition matrix is a 41 x 41 matrix, where element \((i,j)\) represents the probability of moving from state \(z_i\) to state \(z_j\).

size(trans)
(41, 41)
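As a quick sanity check (my own addition, not part of Toni's code), each row of trans is a conditional distribution over next-period shocks and should therefore sum to one:

```julia
# Row i of trans is Pr(z' = z_j | z = z_i); the first and last bins absorb
# the tails, so each row should sum to one up to floating-point error.
row_sums = sum(trans, dims=2)
println(maximum(abs.(row_sums .- 1.0)) < 1e-10)  # prints true
```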

Next, we discretize the capital stock space, so that later on, our policy function contains integers indexing different \(K\) values. The upper and lower bounds on \(K\) are obtained by plugging the minimum and maximum values of \(z\) into the steady-state condition that sets the marginal product of capital equal to the user cost of capital: \[ K = \left(\frac{\alpha z}{r+\delta}\right)^{\frac{1}{1-\alpha}}. \]

We discretize \(K\) into 501 points. Here is an example from the valfunclunky code script, where the risk-free rate is 4%, capital depreciation rate δ = 0.15, curvature parameter α = 0.70, and equity adjustment cost λ = 0.1.

# Dimensions
N_k = 501
N_k_pol = 501

# Risk free rate and discount factor
rf = 0.04
β = 1.0 / (1.0 + rf)

# Parameters
δ = 0.15   # capital depreciation
α = 0.70   # curvature parameter
λ = 0.1    # equity adjustment cost

##### Discretize capital 
zmax = maximum(z_grid)
zmin = minimum(z_grid)
skale = 0.2 # to make steady state capital stock reasonable

# Euler: plug in min & max of z to get bounds on k
kminstar = ((rf + δ) / (α * skale * zmin))^(1.0 / (α - 1.0))
kminstar = log(kminstar)

kmaxstar = ((rf + δ) / (α * skale * zmax))^(1.0 / (α - 1.0))
kmaxstar = log(kmaxstar)

# Create k grid (and take exponents back)
k_grid = collect(range(kminstar, stop=kmaxstar, length=N_k))
k_grid = exp.(k_grid)

# Create k' grid (policy grid)
k_pol_grid = collect(range(kminstar, stop=kmaxstar, length=N_k_pol))
k_pol_grid = exp.(k_pol_grid);

println("Length of the policy grid is ", length(k_pol_grid))
Length of the policy grid is 501

Profit Function

Recall that the firm’s objective is to maximize the present value of cash flows to shareholders. For convenience, let’s just call this the profit function. As already discussed in the Model section, the profit function \(E(\cdot)\) is defined piecewise according to the sign of internal cash flows \(E^*(\cdot)\):

\[ \begin{align*} E(K, K', z) = \begin{cases} E^* & \text{if } E^* \geq 0 \\ E^*(1 + \lambda) & \text{if } E^* < 0 \end{cases} \end{align*} \] where \[ E^*(K, K', z) = zK^{\alpha} - (K' - (1 - \delta)K). \]

The profit function has three input parameters: current capital stock K, next period’s capital stock K', and profitability shock z. Since these variables have all been discretized, we can fully capture the profit function in a 3D array. The following code fills in this array with the profit function values for different (z, K, K') combinations.

# Initialize profit array
profit = zeros(N_z, N_k, N_k_pol)

# Loop over N_k_pol = 501 values of K' ...
Threads.@threads for ik_pol in 1:N_k_pol

    # ... and N_k = 501 values of K
    for ik in 1:N_k

        # ... and N_z = 41 values of z
        for iz in 1:N_z

            # Profit = internal cash flow if positive
            profit[iz, ik, ik_pol] =
                skale * z_grid[iz] * k_grid[ik]^α -
                (k_pol_grid[ik_pol] - (1.0 - δ) * k_grid[ik])

            # If negative, pay financing costs λ%    
            if profit[iz, ik, ik_pol] < 0.0
                profit[iz, ik, ik_pol] =
                    profit[iz, ik, ik_pol] * (1.0 + λ)
            end
        end
    end
end

println("Size of the profit array is ", size(profit))
Size of the profit array is (41, 501, 501)

A nested parallel loop likely causes too much overhead, so I only parallelize the outer loop here.
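For comparison, the same array can be built with broadcasting instead of explicit loops (a sketch of my own, not part of Toni's code). Reshaping puts z, K, and K' on separate axes so the operations expand into the full 3D array; note that this allocates the whole (roughly 80 MB) array in one expression:

```julia
# Broadcast version of the profit computation: axis 1 is z, axis 2 is K,
# axis 3 is K'. The result should match the looped profit array.
z3  = reshape(z_grid,     N_z, 1,   1)
k3  = reshape(k_grid,     1,   N_k, 1)
kp3 = reshape(k_pol_grid, 1,   1,   N_k_pol)

profit_bc = skale .* z3 .* k3 .^ α .- (kp3 .- (1.0 - δ) .* k3)

# Apply the financing cost wherever internal cash flow is negative
profit_bc .= ifelse.(profit_bc .< 0.0, profit_bc .* (1.0 + λ), profit_bc)

println(maximum(abs.(profit_bc .- profit)))  # 0 up to floating-point error
```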

Bellman Equation

Next, we formulate the Bellman equation and numerically solve the maximization problem. The key relevant files are valfun.jl and maxbellman.jl, and there are a few more files that implement more advanced computational methods. inbetween.jl handles interpolation in a set-up where the policy grid is finer than the capital grid (i.e. N_k_pol > N_k). howard.jl implements Howard’s policy improvement algorithm, which speeds up the VFI by repeatedly updating the value function under a fixed policy instead of re-solving the maximization at every iteration; it is most effective once the policy function has (nearly) converged. update.jl uses the McQueen-Porteus algorithm to update the policy and value functions. More details can be found in the summer school slides.
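To make the idea behind Howard's algorithm concrete, here is a minimal sketch of one policy-evaluation step. The function howard_step and its interface are my own illustration, not the contents of howard.jl, and it assumes the policy grid coincides with the capital grid (as it does here):

```julia
# One Howard policy-evaluation step: apply the Bellman operator under a
# FIXED policy, skipping the expensive maximization over K'.
#   V      : N_z × N_k value function
#   pol    : N_z × N_k integer policy (indices into the K' grid)
#   profit : N_z × N_k × N_k_pol profit array
#   trans  : N_z × N_z shock transition matrix
#   β      : discount factor
function howard_step(V, pol, profit, trans, β)
    EV = trans * V                  # EV[iz, ikp] = E[V(z', K'_{ikp}) | z_iz]
    Vnew = similar(V)
    for ik in axes(V, 2), iz in axes(V, 1)
        ikp = pol[iz, ik]           # the fixed choice of next-period capital
        Vnew[iz, ik] = profit[iz, ik, ikp] + β * EV[iz, ikp]
    end
    return Vnew
end
```

Running a handful of these cheap evaluation steps between full maximizations can substantially reduce the number of expensive Bellman maximizations.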

The core of the VFI is to keep updating the value function until it converges to the fixed point of the Bellman operator. To do so, we keep iterating on the Bellman equation until the maximum difference between the current and previous value functions falls below a given tolerance level: at each iteration, we compare the newly updated value function against the one from the previous iteration.

The criterion is \( \max_{K, z} \left| V_{n+1}(K, z) - V_n(K, z) \right| < \varepsilon \), where \(\varepsilon\) is the chosen tolerance.
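Putting the pieces together, a stripped-down version of the whole iteration could look as follows. The function vfi is my own sketch, assuming k_pol_grid coincides with k_grid; the actual valfun.jl adds refinements such as interpolation and Howard acceleration:

```julia
# Minimal VFI sketch: iterate the Bellman operator until the sup-norm
# change in V falls below tol. Assumes the K' grid equals the K grid.
function vfi(profit, trans, β; tol=1e-6, maxit=10_000)
    Nz, Nk, Nkp = size(profit)
    V = zeros(Nz, Nk)
    pol = ones(Int, Nz, Nk)
    for it in 1:maxit
        EV = trans * V                  # E[V(z', K') | z], rows: z, cols: K'
        Vnew = similar(V)
        for ik in 1:Nk, iz in 1:Nz
            best, ibest = -Inf, 1
            for ikp in 1:Nkp            # maximize over next-period capital
                cand = profit[iz, ik, ikp] + β * EV[iz, ikp]
                if cand > best
                    best, ibest = cand, ikp
                end
            end
            Vnew[iz, ik] = best
            pol[iz, ik] = ibest
        end
        if maximum(abs.(Vnew .- V)) < tol   # convergence criterion
            return Vnew, pol
        end
        V = Vnew
    end
    return V, pol
end
```

Calling `vfi(profit, trans, β)` with the arrays constructed above would return the converged value function and the integer policy function indexing k_pol_grid.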

2. Simulation

3. Estimation